Online Learning: Random Averages, Combinatorial Parameters, and Learnability

Authors

  • Alexander Rakhlin
  • Karthik Sridharan
  • Ambuj Tewari
Abstract

We study learnability in the online learning model. We define several complexity measures which capture the difficulty of learning in a sequential manner. Among these measures are analogues of Rademacher complexity, covering numbers, and fat-shattering dimension from statistical learning theory. Relationships among these complexity measures, their connections to online learning, and tools for bounding them are provided. In the setting of supervised learning, finiteness of the introduced scale-sensitive parameters is shown to be equivalent to learnability. The complexities we define also ensure uniform convergence for non-i.i.d. data, extending uniform Glivenko-Cantelli type results. We conclude by showing online learnability for an array of examples.
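As a concrete illustration of the sequential analogue of Rademacher complexity mentioned in the abstract, the sequential version is commonly defined over trees rather than i.i.d. samples. The following is a sketch of the standard definition; the notation (the tree x, the signs ε) is assumed here rather than taken from this page:

```latex
% Sequential Rademacher complexity of a function class F (sketch).
% The supremum ranges over X-valued binary trees x of depth T;
% eps_1, ..., eps_T are i.i.d. Rademacher signs, and x_t(eps) denotes
% the node of the tree reached by following the path eps_1 .. eps_{t-1}.
\mathfrak{R}_T(\mathcal{F})
  = \sup_{\mathbf{x}} \;
    \mathbb{E}_{\epsilon}\!\left[
      \sup_{f \in \mathcal{F}} \frac{1}{T}
      \sum_{t=1}^{T} \epsilon_t \, f\big(\mathbf{x}_t(\epsilon)\big)
    \right]
```

Replacing the tree supremum with an expectation over an i.i.d. sample recovers the classical Rademacher complexity, which is how the sequential quantity generalizes the statistical one.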


Related papers

A Few Notes on Statistical Learning Theory

2 Glivenko-Cantelli Classes; 2.1 The classical approach; 2.1.1 The symmetrization procedure; 2.1.2 Covering numbers and complexity estimates; 2.2 Combinatorial parameters and covering numbers; 2.2.1 Uniform entropy and the VC dimen...


Geometric parameters in Learning Theory

3 Uniform measures of complexity; 3.1 Metric entropy and the combinatorial dimension; 3.1.1 Binary valued classes; 3.1.2 Real valued classes; 3.2 Random averages and the combinatorial dimension; 3.3 Phase transitions in GC classes ...


Links Between Learning and Optimization: a Brief Tutorial

This report is a brief exposition of some of the important links between machine learning and combinatorial optimization. We explain how efficient ‘learnability’ in standard probabilistic models of learning is linked to the existence of efficient randomized algorithms for certain natural combinatorial optimization problems, and we discuss the complexity of some of these optimization problems.


Equivalences between learning of data and probability distributions, and their applications

Algorithmic learning theory traditionally studies the learnability of effective infinite binary sequences (reals), while recent work by [Vitányi and Chater, 2017] and [Bienvenu et al., 2014] has adapted this framework to the study of learnability of effective probability distributions from random data. We prove that for certain families of probability measures that are parametrized by reals, le...


Agnostic Online Learning

We study learnability of hypothesis classes in agnostic online prediction models. The analogous question in the PAC learning model [Valiant, 1984] was addressed by Haussler [1992] and others, who showed that the VC dimension characterization of the sample complexity of learnability extends to the agnostic (or "unrealizable") setting. In his influential work, Littlestone [1988] described a combi...
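The combinatorial parameter Littlestone introduced for this setting is now known as the Littlestone dimension, and in the agnostic online model it characterizes the optimal regret up to logarithmic factors. The following is a sketch of that characterization as established by Ben-David, Pál, and Shalev-Shwartz; the symbols Ldim and Regret are assumed notation, not taken from this snippet:

```latex
% Optimal agnostic online regret for a binary-valued class H over T
% rounds, in terms of its Littlestone dimension Ldim(H).
% Sketch: tilde-Theta hides logarithmic factors in T and Ldim(H).
\mathrm{Regret}_T(\mathcal{H})
  = \tilde{\Theta}\!\left( \sqrt{\mathrm{Ldim}(\mathcal{H}) \cdot T} \right)
```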



Journal:

Volume   Issue

Pages  -

Publication date: 2010